Training Multi-layer Perceptrons Using MiniMin Approach
Authors
Abstract
Multi-layer perceptrons (MLPs) have been widely used in classification and regression tasks. How to improve the training speed of MLPs has been an interesting field of research. Instead of the classical method, we train MLPs with a MiniMin model, which ensures that the weights of the last layer are optimal at each step. Our method achieves significant improvements in training speed on several large benchmark data sets.
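The abstract does not spell out the MiniMin model, but its core idea, solving the last-layer weights to optimality at every training step while the earlier layers follow gradient descent, can be sketched as follows. Everything here (one hidden layer, tanh activations, the ridge term, all hyperparameters) is an illustrative assumption, not the paper's actual algorithm:

```python
import numpy as np

# Hedged sketch of the idea stated in the abstract: at every step the
# last-layer weights are set to their optimum (here, the regularised
# least-squares solution), while the hidden layer takes an ordinary
# gradient step. All names and hyperparameters are assumptions.

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x) on [-2, 2]
X = rng.uniform(-2.0, 2.0, size=(200, 1))
y = np.sin(X)

n_hidden = 20
W1 = rng.normal(scale=0.5, size=(1, n_hidden))   # hidden-layer weights
b1 = np.zeros(n_hidden)                          # hidden-layer biases
lr = 0.05                                        # learning rate for the hidden layer
ridge = 1e-6                                     # small ridge term for numerical stability

for step in range(500):
    # Forward pass through the hidden layer
    H = np.tanh(X @ W1 + b1)                     # shape (200, n_hidden)

    # Inner minimisation: optimal last-layer weights in closed form,
    # i.e. regularised least squares for H @ W2 ~ y
    A = H.T @ H + ridge * np.eye(n_hidden)
    W2 = np.linalg.solve(A, H.T @ y)             # shape (n_hidden, 1)

    # Outer minimisation: gradient step on the hidden-layer parameters
    err = H @ W2 - y                             # residual, shape (200, 1)
    dH = (err @ W2.T) * (1.0 - H ** 2)           # backprop through tanh
    W1 -= lr * (X.T @ dH) / len(X)
    b1 -= lr * dH.mean(axis=0)

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 - y) ** 2))
print(f"final MSE: {mse:.4f}")
```

Because the output layer is exactly optimal for the current hidden representation at every iteration, each outer step only has to improve the hidden features, which is the kind of two-level (min-min) structure the abstract alludes to.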
Similar Articles
Modeling of measurement error in refractive index determination of fuel cell using neural network and genetic algorithm
Abstract: In this paper, a method for determining the refractive index in the membrane of a fuel cell, based on a three-longitudinal-mode laser heterodyne interferometer, is presented. The optical path difference between the target and reference paths is fixed, and the phase shift is then calculated in terms of the refractive index shift. The measurement accuracy of this system is limited by nonlinearity erro...
Apply Multi-Layer Perceptrons Neural Network for Off-line signature verification and recognition
This paper discusses the application of multi-layer perceptrons to signature verification and recognition, using a new approach that enables the user to recognize whether a signature is genuine or a forgery. The approach starts by scanning images into the computer, then improving their quality through image enhancement and noise reduction, followed by feature extraction and neural network training, and ...
Multi-layer Perceptrons for Functional Data Analysis: A Projection Based Approach
In this paper, we propose a new way to use Functional Multi-Layer Perceptrons (FMLP). In our previous work, we introduced a natural extension of Multi-Layer Perceptrons (MLP) to functional inputs based on direct manipulation of the input functions. Here we propose instead to represent the input and weight functions by projection onto a truncated basis. We show that the proposed model has t...
A Pilot Sampling Method for Multi-layer Perceptrons
As the sample size grows, the accuracy of trained multi-layer perceptrons improves, with lower error rates. But we cannot use ever larger samples, because the computational cost of training the multi-layer perceptrons becomes enormous and overfitting can occur. This paper suggests an effective approach to determining a proper sample size for multi-layer perceptr...
Weight Quantization for Multi-layer Perceptrons Using Soft Weight Sharing
We propose a novel approach for quantizing the weights of a multi-layer perceptron (MLP) for efficient VLSI implementation. Our approach uses soft weight sharing, previously proposed for improved generalization, and treats the weights not as constant numbers but as random variables drawn from a Gaussian mixture distribution, which includes as its special cases k-means clustering and uniform q...